

Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance.

This paper studies the worst-case hardness of estimating the parameters of a binary pairwise undirected graphical model. Considering the specific case of the hard-core (independent set) model, and relying on the known result that approximating the partition function given the parameters is hard (even in the unweighted case where all theta_i are 0), the authors show that the reverse direction -- approximating the parameters given the node marginals -- is also hard, in the sense that it admits no FPRAS. This is a strong theoretical paper: it addresses a computational complexity question that arises often in theory and practice and had been conjectured, but never formally shown, to be hard (to the best of my and the authors' knowledge). The work is unlikely to have much impact in practice.



Hardness of parameter estimation in graphical models

Guy Bresler, David Gamarnik, Devavrat Shah

Neural Information Processing Systems

We consider the problem of learning the canonical parameters specifying an undirected graphical model (Markov random field) from the mean parameters. For graphical models representing a minimal exponential family, the canonical parameters are uniquely determined by the mean parameters, so the problem is feasible in principle. The goal of this paper is to investigate the computational feasibility of this statistical task. Our main result shows that parameter estimation is in general intractable: no algorithm can learn the canonical parameters of a generic pair-wise binary graphical model from the mean parameters in time bounded by a polynomial in the number of variables (unless RP = NP). Indeed, such a result has been believed to be true (see [1]) but no proof was known. Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters. Our reduction entails showing that the marginal polytope boundary has an inherent repulsive property, which validates an optimization procedure over the polytope that does not use any knowledge of its structure (as required by the ellipsoid method and others).
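The forward map the abstract refers to -- from canonical parameters theta to mean parameters (node marginals) -- can be made concrete with a minimal brute-force sketch for the hard-core model. The function name and the tiny example graph below are illustrative choices, not taken from the paper; the paper's result is that no polynomial-time algorithm can invert this map in general (unless RP = NP), whereas evaluating it by enumeration, as here, takes exponential time.

```python
import math
from itertools import product

def hardcore_marginals(n, edges, theta):
    """Brute-force mean parameters (node marginals) of the hard-core model.

    Admissible configurations x in {0,1}^n are independent sets of the
    graph (x_i = x_j = 1 is forbidden for each edge (i, j)); an
    admissible configuration has weight exp(sum_i theta[i] * x[i]).
    """
    Z = 0.0                 # partition function
    marg = [0.0] * n        # unnormalized marginals
    for x in product((0, 1), repeat=n):
        if any(x[i] and x[j] for i, j in edges):
            continue  # x is not an independent set
        w = math.exp(sum(theta[i] * x[i] for i in range(n)))
        Z += w
        for i in range(n):
            marg[i] += w * x[i]
    return [m / Z for m in marg], Z

# Path graph 0-1-2 with all theta_i = 0: every independent set has
# weight 1, so Z counts the independent sets ({}, {0}, {1}, {2}, {0,2}).
marg, Z = hardcore_marginals(3, [(0, 1), (1, 2)], [0.0, 0.0, 0.0])
```

On this path graph, Z = 5 and the marginals are (0.4, 0.2, 0.4); the hardness result says that recovering theta from such marginals is intractable for general graphs, even approximately.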

